Halley's Method

In numerical analysis, Halley's method is a root-finding algorithm used for functions of one real variable with a continuous second derivative. It is named after its inventor, Edmond Halley. The algorithm is second in the class of Householder's methods, after Newton's method. Like the latter, it iteratively produces a sequence of approximations to the root; their rate of convergence to the root is cubic. Multidimensional versions of this method exist. Halley's method exactly finds the roots of a linear-over-linear Padé approximation to the function, in contrast to Newton's method or the secant method, which approximate the function linearly, or Muller's method, which approximates the function quadratically.
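
The Padé characterization can be checked numerically. The sketch below (added here for illustration, not part of the original text) builds the [1/1] Padé approximant of a sample function from its degree-2 Taylor coefficients using the mpmath library and compares the approximant's root with one Halley update; the function cos(x) − x, the expansion point, and the working precision are arbitrary choices.

from mpmath import mp, mpf, taylor, pade, cos

mp.dps = 30                      # working precision (arbitrary choice)
f = lambda x: cos(x) - x         # sample function, illustrative only
x0 = mpf('1.0')                  # expansion point / current iterate

# Degree-2 Taylor coefficients of f about x0: [f, f', f''/2]
c = taylor(f, x0, 2)

# [1/1] Pade approximant of those coefficients: p, q are the numerator and
# denominator coefficients in powers of (x - x0)
p, q = pade(c, 1, 1)

pade_root = x0 - p[0] / p[1]     # zero of the linear-over-linear approximant

# One Halley update computed directly from the same derivative values
fx, f1, f2 = c[0], c[1], 2 * c[2]
halley_update = x0 - 2 * fx * f1 / (2 * f1**2 - fx * f2)

print(pade_root)
print(halley_update)             # the two values agree to the working precision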


Method

Edmond Halley was an English mathematician and astronomer who introduced the method now called by his name. Halley's method is a numerical algorithm for solving the nonlinear equation ''f''(''x'') = 0. In this case, the function ''f'' has to be a function of one real variable. The method consists of a sequence of iterations:

:x_{n+1} = x_n - \frac{2 f(x_n) f'(x_n)}{2 [f'(x_n)]^2 - f(x_n) f''(x_n)}

beginning with an initial guess ''x''0.

If ''f'' is a three times continuously differentiable function and ''a'' is a zero of ''f'' but not of its derivative, then, in a neighborhood of ''a'', the iterates ''x_n'' satisfy:

:|x_{n+1} - a| \le K \cdot |x_n - a|^3,\text{ for some }K > 0.

This means that the iterates converge to the zero if the initial guess is sufficiently close, and that the convergence is cubic.

The following alternative formulation shows the similarity between Halley's method and Newton's method. The expression f(x_n)/f'(x_n) is computed only once, and it is particularly useful when f''(x_n)/f'(x_n) can be simplified:

:x_{n+1} = x_n - \frac{2 f(x_n) f'(x_n)}{2 [f'(x_n)]^2 - f(x_n) f''(x_n)} = x_n - \frac{f(x_n)}{f'(x_n)} \left[1 - \frac{f(x_n)}{f'(x_n)} \cdot \frac{f''(x_n)}{2 f'(x_n)}\right]^{-1}.

When the second derivative is very close to zero, the Halley iteration is almost the same as the Newton iteration.
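
As a concrete illustration of the iteration above, here is a minimal Python sketch (added here for illustration, not part of the original text); the sample function x^3 - 2x - 5, the starting guess, and the tolerance are arbitrary choices.

def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """One-dimensional Halley iteration:
    x_{n+1} = x_n - 2 f f' / (2 (f')**2 - f f'')."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        step = 2 * fx * dfx / (2 * dfx**2 - fx * d2fx)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: the real root of x^3 - 2x - 5 (sample function, illustrative only)
root = halley(lambda x: x**3 - 2*x - 5,
              lambda x: 3*x**2 - 2,
              lambda x: 6*x,
              x0=3.0)
print(root)   # approximately 2.0945514815423265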


Derivation

Consider the function

:g(x) = \frac{f(x)}{\sqrt{|f'(x)|}} .

Any root of ''f'' which is ''not'' a root of its derivative is a root of ''g''; and any root ''r'' of ''g'' must be a root of ''f'' provided the derivative of ''f'' at ''r'' is not infinite. Applying Newton's method to ''g'' gives

:x_{n+1} = x_n - \frac{g(x_n)}{g'(x_n)}

with

:g'(x) = \frac{2 [f'(x)]^2 - f(x) f''(x)}{2 f'(x) \sqrt{|f'(x)|}} ,

so that

:\frac{g(x_n)}{g'(x_n)} = \frac{2 f(x_n) f'(x_n)}{2 [f'(x_n)]^2 - f(x_n) f''(x_n)} ,

and the result follows. Notice that if ''f′''(''c'') = 0, then one cannot apply this at ''c'' because ''g''(''c'') would be undefined.
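
A quick numerical cross-check of this derivation (added here for illustration, not part of the original text): one Newton step applied to g, with g' approximated by a central difference, should coincide with one Halley step. The sample function, the evaluation point, and the step size are arbitrary choices.

import math

# Sample function and its first two derivatives (illustrative choice)
f   = lambda x: x**3 - 2*x - 5
fp  = lambda x: 3*x**2 - 2
fpp = lambda x: 6*x

def g(x):
    # Auxiliary function from the derivation
    return f(x) / math.sqrt(abs(fp(x)))

def central_diff(func, x, h=1e-6):
    # Finite-difference approximation of func'(x)
    return (func(x + h) - func(x - h)) / (2 * h)

x0 = 2.5
newton_on_g = x0 - g(x0) / central_diff(g, x0)
halley_step = x0 - 2*f(x0)*fp(x0) / (2*fp(x0)**2 - f(x0)*fpp(x0))

print(newton_on_g, halley_step)
# The two values agree up to the small error of the finite-difference derivative.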


Cubic convergence

Suppose ''a'' is a root of ''f'' but not of its derivative, that the third derivative of ''f'' exists and is continuous in a neighborhood of ''a'', and that ''x_n'' is in that neighborhood. Then Taylor's theorem implies:

:0 = f(a) = f(x_n) + f'(x_n) (a - x_n) + \frac{f''(x_n)}{2!} (a - x_n)^2 + \frac{f'''(\xi)}{3!} (a - x_n)^3

and also

:0 = f(a) = f(x_n) + f'(x_n) (a - x_n) + \frac{f''(\eta)}{2!} (a - x_n)^2,

where ξ and η are numbers lying between ''a'' and ''x_n''. Multiply the first equation by 2f'(x_n) and subtract from it the second equation times f''(x_n)(a - x_n) to give:

:\begin{align} 0 &= 2 f(x_n) f'(x_n) + 2 [f'(x_n)]^2 (a - x_n) + f'(x_n) f''(x_n) (a - x_n)^2 + \frac{f'(x_n) f'''(\xi)}{3} (a - x_n)^3 \\ &\qquad - f(x_n) f''(x_n) (a - x_n) - f'(x_n) f''(x_n) (a - x_n)^2 - \frac{f''(x_n) f''(\eta)}{2} (a - x_n)^3. \end{align}

Canceling f'(x_n) f''(x_n) (a - x_n)^2 and re-organizing terms yields:

:0 = 2 f(x_n) f'(x_n) + \left(2 [f'(x_n)]^2 - f(x_n) f''(x_n) \right) (a - x_n) + \left(\frac{f'(x_n) f'''(\xi)}{3} - \frac{f''(x_n) f''(\eta)}{2} \right) (a - x_n)^3.

Put the second term on the left side and divide through by

:2 [f'(x_n)]^2 - f(x_n) f''(x_n)

to get:

:a - x_n = \frac{-2 f(x_n) f'(x_n)}{2 [f'(x_n)]^2 - f(x_n) f''(x_n)} - \frac{2 f'(x_n) f'''(\xi) - 3 f''(x_n) f''(\eta)}{6 \left(2 [f'(x_n)]^2 - f(x_n) f''(x_n)\right)} (a - x_n)^3.

Thus:

:a - x_{n+1} = - \frac{2 f'(x_n) f'''(\xi) - 3 f''(x_n) f''(\eta)}{12 [f'(x_n)]^2 - 6 f(x_n) f''(x_n)} (a - x_n)^3.

The limit of the coefficient on the right side as x_n \to a is:

:-\frac{2 f'(a) f'''(a) - 3 [f''(a)]^2}{12 [f'(a)]^2}.

If we take ''K'' to be a little larger than the absolute value of this, we can take absolute values of both sides of the formula and replace the absolute value of the coefficient by its upper bound near ''a'' to get:

:|a - x_{n+1}| \leq K |a - x_n|^3,

which is what was to be proved. To summarize,

:\Delta x_{i+1} = \frac{3 [f''(a)]^2 - 2 f'(a) f'''(a)}{12 [f'(a)]^2} (\Delta x_i)^3 + O\left[(\Delta x_i)^4\right], \qquad \Delta x_i \triangleq x_i - a.
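
The cubic rate can also be observed numerically. The sketch below (added here for illustration, not part of the original text; the sample function and starting point are arbitrary choices) tracks the error of a few Halley iterates and compares the ratio |\Delta x_{i+1}| / |\Delta x_i|^3 with the limiting coefficient (3 [f''(a)]^2 - 2 f'(a) f'''(a)) / (12 [f'(a)]^2).

# Demonstration of cubic convergence for the sample function f(x) = x^3 - 2x - 5.
f   = lambda x: x**3 - 2*x - 5
fp  = lambda x: 3*x**2 - 2
fpp = lambda x: 6*x
f3  = lambda x: 6.0

a = 2.0945514815423265   # reference root, accurate to double precision
x = 3.0                  # starting guess
errors = [abs(x - a)]
for _ in range(3):
    x -= 2*f(x)*fp(x) / (2*fp(x)**2 - f(x)*fpp(x))
    errors.append(abs(x - a))

# Limiting coefficient from the error formula, evaluated at the root
C = (3*fpp(a)**2 - 2*fp(a)*f3(a)) / (12*fp(a)**2)
print("predicted coefficient:", C)            # about 0.227
for e_prev, e_next in zip(errors, errors[1:]):
    print(e_next, e_next / e_prev**3)         # the ratio approaches C as the error shrinks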


External links

* ''Newton's method and high order iterations'', Pascal Sebah and Xavier Gourdon, 2001 (the site has a link to a Postscript version for better formula display)